
    3D Multimodal Interaction with Physically-based Virtual Environments

    The virtual has become a huge field of exploration for researchers: it can assist the surgeon, help the prototyping of industrial objects, simulate natural phenomena, be a fantastic time machine, or entertain users through games or movies. Far beyond the mere visual rendering of the virtual environment, Virtual Reality aims at literally immersing the user in the virtual world. VR technologies simulate digital environments with which users can interact and, as a result, perceive through different modalities the effects of their actions in real time. The challenges are huge: the user's motions need to be captured and to have an immediate impact on the virtual world by modifying its objects in real time. In addition, the targeted immersion of the user is not only visual: auditory or haptic feedback needs to be taken into account, merging all the sensory modalities of the user into a multimodal response. The global objective of my research activities is to improve 3D interaction with complex virtual environments by proposing novel approaches for physically-based and multimodal interaction. I have laid the foundations of my work on designing interactions with complex virtual worlds, where complexity refers to higher demands on the characteristics of the virtual environments. My research can be described within three main research axes inherent to the 3D interaction loop: (1) the physically-based modeling of the virtual world, to take into account the complexity of virtual object behavior, their topology modifications, as well as their interactions; (2) the multimodal feedback combining the sensory modalities into a global response from the virtual world to the user; and (3) the design of body-based 3D interaction techniques and devices, involving the head, both hands, the fingers, the legs, or even the whole body, for establishing the interfaces between the user and the virtual world. All these contributions can be gathered in a general framework covering the 3D interaction loop. By improving all the components of this framework, I aim at proposing approaches that can be used in future virtual reality applications, but also more generally in other areas such as medical simulation, gesture training, robotics, virtual prototyping for industry, or web content.

    Combining Brain-Computer Interfaces and Haptics: Detecting Mental Workload to Adapt Haptic Assistance

    In this paper we introduce the combined use of Brain-Computer Interfaces (BCI) and haptic interfaces. We propose to adapt haptic guides based on the mental activity measured by a BCI system. This novel approach is illustrated within a proof-of-concept system: haptic guides are toggled during a path-following task thanks to a mental workload index provided by a BCI. The aim of this system is to provide haptic assistance only when the user's brain activity reflects a high mental workload. A user study conducted with 8 participants shows that our proof-of-concept is operational and exploitable. Results show that activation of the haptic guides occurs in the most difficult part of the path-following task. Moreover, it increases task performance by 53% while activating assistance only 59% of the time. Taken together, these results suggest that a BCI could be used to determine when the user needs assistance during haptic interaction and to enable haptic guides accordingly. Comment: EuroHaptics (2012)
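    As a rough illustration of the control logic such a system implies, the following Python sketch toggles a haptic guide whenever a mental workload index crosses a threshold; the workload source, the threshold value, and the HapticGuide class are hypothetical placeholders, not the authors' implementation.

    # Illustrative sketch only: toggling haptic assistance from a workload index.
    # workload_index(), THRESHOLD and HapticGuide are hypothetical stand-ins.
    import random
    import time

    THRESHOLD = 0.7      # assumed normalized workload level that triggers assistance
    HYSTERESIS = 0.1     # margin to avoid rapid on/off switching around the threshold

    class HapticGuide:
        """Stand-in for a haptic guide that can be switched on or off."""
        def __init__(self):
            self.active = False

        def set_active(self, active):
            if active != self.active:
                self.active = active
                print("haptic guide", "ON" if active else "OFF")

    def workload_index():
        """Placeholder for the BCI-provided mental workload index in [0, 1]."""
        return random.random()

    def control_loop(guide, steps=50, period=0.1):
        for _ in range(steps):
            w = workload_index()
            if not guide.active and w > THRESHOLD:
                guide.set_active(True)               # high workload: assist the user
            elif guide.active and w < THRESHOLD - HYSTERESIS:
                guide.set_active(False)              # workload back to normal: release
            time.sleep(period)

    if __name__ == "__main__":
        control_loop(HapticGuide())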

    Computer-assisted Teaching in Class Situation: a High-school Math Lab on Vectors

    This paper presents our design and experiment of a computer-assisted class laboratory on vectors in high school. Our main goal is to improve the acquisition of notions by all pupils, from advanced level to remedial level. Our secondary goal is to improve the motivation of pupils toward maths and sciences. Our approach consists in having the pupils practice the notions included in a given set of pedagogic objectives, through real-world-inspired problems. We draw on MobiNet, a platform that simulates programmable mobiles on screen and through the network. Our class laboratory consists of a set of pre-programmed interactive simulations, allowing the pupils to react to their errors and to validate each exercise in an autonomous way. This class laboratory was conducted under the same conditions and constraints as an ordinary math lab: same duration, and no preparation of the pupils (other than the ordinary math course). Still, the autonomy of the exercises and the networking capability would also allow use with no teacher on site. In this paper, we describe our vector class laboratory experiment: objectives, design, conduct, and evaluation. Published as a volume in Lecture Notes in Computer Science.
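    To give a concrete idea of such a pre-programmed, self-validating exercise (this is not MobiNet code), here is a minimal Python sketch of a vector problem in the same spirit: a mobile moves by a constant velocity vector and the check tells the pupil whether the chosen vector reaches the target; the numbers and the validation rule are illustrative assumptions.

    # Illustrative, MobiNet-style vector exercise with autonomous validation.
    def simulate(position, velocity, steps):
        """Return the mobile's position after applying the velocity vector `steps` times."""
        x, y = position
        vx, vy = velocity
        return (x + steps * vx, y + steps * vy)

    def check_answer(start, target, velocity, steps):
        """Tell the pupil whether the chosen vector solves the exercise."""
        return simulate(start, velocity, steps) == target

    if __name__ == "__main__":
        start, target = (0, 0), (6, 9)
        print(check_answer(start, target, velocity=(2, 3), steps=3))  # True
        print(check_answer(start, target, velocity=(1, 2), steps=3))  # False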

    "FlyVIZ": A Novel Display Device to Provide Humans with 360° Vision by Coupling Catadioptric Camera with HMD.

    Have you ever dreamed of having eyes in the back of your head? In this paper we present a novel display device called FlyVIZ which enables humans to experience a real-time 360° vision of their surroundings for the first time. To do so, we combine a panoramic image acquisition system (positioned on top of the user's head) with a Head-Mounted Display (HMD). The omnidirectional images are transformed to fit the characteristics of the HMD screens. As a result, the user can see his/her surroundings, in real time, with 360° images mapped into the HMD field of view. We foresee potential applications in different fields where an augmented human capacity (an extended field of view) could be beneficial, such as surveillance, security, or entertainment. FlyVIZ could also be used in novel perception and neuroscience studies.
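    To illustrate the kind of image transformation involved (not the FlyVIZ implementation itself), the following Python sketch unwraps the donut-shaped image of a catadioptric camera into a 360° panoramic strip that an HMD could display; the image size, mirror radii, and nearest-neighbour sampling are arbitrary assumptions.

    # Illustrative unwrapping of a catadioptric (donut-shaped) image into a panorama.
    import numpy as np

    def unwrap_catadioptric(img, center, r_min, r_max, out_w=1440, out_h=240):
        """Map polar coordinates of the mirror image to a rectangular panorama."""
        h, w = img.shape[:2]
        cx, cy = center
        panorama = np.zeros((out_h, out_w) + img.shape[2:], dtype=img.dtype)
        for v in range(out_h):
            r = r_min + (r_max - r_min) * v / (out_h - 1)        # radius on the mirror image
            for u in range(out_w):
                theta = 2.0 * np.pi * u / out_w                  # viewing direction (0..360 degrees)
                x = int(round(cx + r * np.cos(theta)))
                y = int(round(cy + r * np.sin(theta)))
                if 0 <= x < w and 0 <= y < h:
                    panorama[out_h - 1 - v, u] = img[y, x]       # flip so the outer ring ends up on top
        return panorama

    if __name__ == "__main__":
        fake = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)   # stand-in camera frame
        pano = unwrap_catadioptric(fake, center=(320, 240), r_min=60, r_max=220)
        print(pano.shape)   # (240, 1440, 3)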

    Electrotactile feedback applications for hand and arm interactions: A systematic review, meta-analysis, and future directions

    Haptic feedback is critical in a broad range of human-machine/computer interaction applications. However, the high cost and low portability/wearability of haptic devices remain unresolved issues, severely limiting the adoption of this otherwise promising technology. Electrotactile interfaces have the advantage of being more portable and wearable due to the reduced size of their actuators, as well as their lower power consumption and manufacturing cost. The applications of electrotactile feedback have been explored in human-computer interaction and human-machine interaction for facilitating hand-based interactions in applications such as prosthetics, virtual reality, robotic teleoperation, surface haptics, portable devices, and rehabilitation. This paper presents a technological overview of electrotactile feedback, as well as a systematic review and meta-analysis of its applications for hand-based interactions. We discuss the different electrotactile systems according to the type of application. We also discuss a quantitative aggregation of the findings, to offer a high-level overview of the state of the art and to suggest future directions. Electrotactile feedback systems showed increased portability/wearability, and they were successful in rendering and/or augmenting most tactile sensations, eliciting perceptual processes, and improving performance in many scenarios. However, knowledge gaps (e.g., embodiment), technical drawbacks (e.g., recurrent calibration, electrode durability), and methodological drawbacks (e.g., sample size) were detected, which should be addressed in future studies. Comment: 18 pages, 1 table, 8 figures, under review in Transactions on Haptics. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Upon acceptance of the article by IEEE, the preprint article will be replaced with the accepted version.

    Design and Evaluation of Methods to Prevent Frame Cancellation in Real-Time Stereoscopic Rendering

    Frame cancellation comes from the conflict between two depth cues: stereo disparity and occlusion with the screen border. When this conflict occurs, the user suffers from poor depth perception of the scene. It also leads to uncomfortable viewing and eyestrain due to problems in fusing left and right images. In this paper we propose a novel method to avoid frame cancellation in real-time stereoscopic rendering. To solve the disparity/frame occlusion conflict, we propose rendering only the part of the viewing volume that is free of conflict, by using clipping methods available in standard real-time 3D APIs. This volume is called the "Stereo Compatible Volume" (SCV) and the method is named "Stereo Compatible Volume Clipping" (SCVC). Black Bands, a proven method initially designed for stereoscopic movies, is also implemented to conduct an evaluation. Twenty-two people were asked to answer open questions and to score criteria for SCVC, Black Bands, and a control method with no specific treatment. Results show that subjective preference and the user's depth perception near the screen edges seem improved by SCVC, and that Black Bands did not achieve the performance we expected. At a time when stereoscopic-capable hardware is available on the mass consumer market, the disparity/frame occlusion conflict in stereoscopic rendering will become more noticeable, and SCVC could be a solution to recommend. Its simplicity of implementation makes the method able to target a wide range of rendering software, from VR applications to game engines.
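    To make the geometric idea behind the Stereo Compatible Volume more concrete, the following Python sketch keeps a 3D point only if it projects inside the screen rectangle from both eye positions, i.e. it lies in the intersection of the two eye frusta defined by the physical screen; the coordinate conventions, eye separation, and screen size are assumptions, and a real renderer would express this test as clip planes rather than a per-point check.

    # Conceptual membership test for a "Stereo Compatible Volume" (screen plane at z = 0,
    # eyes at z = eye_z looking toward -z). Illustrative only, not the paper's code.
    def projects_inside_screen(point, eye, half_w, half_h):
        """Project `point` onto the screen plane z = 0 from `eye` and test the window."""
        px, py, pz = point
        ex, ey, ez = eye
        if pz >= ez:                       # point behind the eye: never visible
            return False
        t = ez / (ez - pz)                 # line eye -> point hits z = 0 at parameter t
        sx = ex + t * (px - ex)
        sy = ey + t * (py - ey)
        return abs(sx) <= half_w and abs(sy) <= half_h

    def inside_stereo_compatible_volume(point, ipd=0.065, eye_z=0.6,
                                        half_w=0.26, half_h=0.16):
        left_eye = (-ipd / 2.0, 0.0, eye_z)
        right_eye = (ipd / 2.0, 0.0, eye_z)
        return (projects_inside_screen(point, left_eye, half_w, half_h) and
                projects_inside_screen(point, right_eye, half_w, half_h))

    if __name__ == "__main__":
        print(inside_stereo_compatible_volume((0.0, 0.0, 0.3)))    # near the screen centre: kept
        print(inside_stereo_compatible_volume((0.3, 0.0, 0.55)))   # pops out near the edge: clipped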

    Indirect Positioning of a 3D Point on a Soft Object Using RGB-D Visual Servoing and a Mass-Spring Model

    In this paper, we present a complete pipeline for positioning a feature point of a soft object at a desired 3D position, by acting on a different manipulation point using a robotic manipulator. For that purpose, the analytic relation between the feature point displacement and the robot motion is derived using a coarse mass-spring model (MSM), while taking into consideration the propagation delay introduced by an MSM. From this modeling step, a novel closed-loop controller is designed for performing the positioning task. To get rid of the model approximations, the object is tracked in real time using an RGB-D sensor, thus allowing any drift between the object and its model to be corrected online. Our model-based and vision-based controller is validated in real experiments on two different soft objects, and the results show promising performance in terms of accuracy, efficiency, and robustness. This work was supported by the GentleMAN (299757) and the BIFROST (313870) projects funded by The Research Council of Norway.
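    As a rough illustration of how a coarse mass-spring model relates a manipulation point to a feature point, the following Python sketch simulates a one-dimensional chain of masses and linear springs, holds the manipulation node at a displaced position, and reports the resulting displacement of the feature node; the chain topology, stiffness, damping, and time step are arbitrary assumptions, not the MSM or controller of the paper.

    # Minimal mass-spring chain: node 0 is the manipulation point (held by the "robot"),
    # the last node is the feature point whose displacement we observe.
    import numpy as np

    def simulate_chain(n_nodes=5, stiffness=50.0, damping=2.0, mass=0.1,
                       dt=0.005, steps=2000, manipulation_offset=0.02):
        rest = np.linspace(0.0, 0.4, n_nodes)           # rest positions along one axis (m)
        pos = rest.copy()
        vel = np.zeros(n_nodes)
        rest_len = rest[1] - rest[0]
        pos[0] += manipulation_offset                    # displace the manipulation node

        for _ in range(steps):
            force = np.zeros(n_nodes)
            for i in range(n_nodes - 1):                 # linear springs between neighbours
                stretch = (pos[i + 1] - pos[i]) - rest_len
                f = stiffness * stretch
                force[i] += f
                force[i + 1] -= f
            force -= damping * vel
            vel[1:] += dt * force[1:] / mass             # node 0 stays where the robot holds it
            pos[1:] += dt * vel[1:]
        return pos

    if __name__ == "__main__":
        final = simulate_chain()
        print("feature node displacement:", final[-1] - 0.4)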

    Toward "Pseudo-Haptic Avatars": Modifying the Visual Animation of Self-Avatar Can Simulate the Perception of Weight Lifting

    In this paper we study how the visual animation of a self-avatar can be artificially modified in real time in order to generate different haptic perceptions. In our experimental setup, participants could watch their self-avatar in a virtual environment in mirror mode while performing a weight lifting task. Users could map their gestures onto the self-animated avatar in real time using a Kinect. We introduce three kinds of modification of the visual animation of the self-avatar according to the effort delivered by the virtual avatar: 1) changes in the spatial mapping between the user's gestures and the avatar, 2) different motion profiles of the animation, and 3) changes in the posture of the avatar (upper-body inclination). The experimental task consisted of a weight lifting task in which participants had to order four virtual dumbbells according to their virtual weight. The user had to lift each virtual dumbbell by means of a tangible stick, and the animation of the avatar was modulated according to the virtual weight of the dumbbell. The results showed that altering the spatial mapping delivered the best performance. Nevertheless, participants globally appreciated all the different visual effects. Our results pave the way to the exploitation of such novel techniques in various VR applications such as sport training, exercise games, or industrial training scenarios in single or collaborative mode.
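    A minimal sketch of the first kind of modification (the spatial mapping between the user's gesture and the avatar) is given below: the avatar's lifting motion is scaled by a control/display ratio that decreases with the virtual weight, so heavier dumbbells appear harder to lift; the linear scaling law and the weight range are illustrative assumptions, not the paper's parameters.

    # Illustrative pseudo-haptic weight: scale the avatar's lift by a ratio tied to virtual weight.
    def avatar_hand_height(user_hand_height, virtual_weight_kg,
                           min_weight=1.0, max_weight=10.0,
                           min_ratio=0.4, max_ratio=1.0):
        """Scale the user's lifting gesture by a control/display ratio that
        decreases linearly with the virtual weight of the dumbbell."""
        w = min(max(virtual_weight_kg, min_weight), max_weight)
        t = (w - min_weight) / (max_weight - min_weight)     # 0 = lightest, 1 = heaviest
        ratio = max_ratio + t * (min_ratio - max_ratio)
        return user_hand_height * ratio

    if __name__ == "__main__":
        for weight in (1.0, 5.0, 10.0):
            print(weight, "kg ->", avatar_hand_height(0.5, weight), "m lifted by the avatar")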

    Rolling Handle for Hand Motion Guidance and Teleoperation

    This paper presents a grounded haptic device able to provide force feedback. The device is composed of a biaxial rocker module and a grounded base which houses two servomotors actuating a mobile platform through three constrained coupling structures. The mobile platform can apply kinesthetic haptic feedback to the user's hand, while the biaxial rocker module has two analog channels which can be used to provide inputs to external systems.